Pod Lifecycle, Part 2

Init container template

init-example.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh','-c','echo The app is running! && sleep 3600']
  initContainers:            # key field: these run to completion, in order, before the app container starts
  - name: init-myservice
    image: busybox
    command: ['sh','-c','until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox
    command: ['sh','-c','until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
[root@k8s01 ~]# kubectl apply -f init-example.yaml
pod/myapp-pod created

[root@k8s01 ~]# kubectl get pods   # the init containers have not completed yet
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          13s

[root@k8s01 ~]# kubectl describe pod myapp-pod
# output omitted

[root@k8s01 ~]# kubectl logs myapp-pod -c init-myservice

waiting for myservice
Server: 10.96.0.10
Address: 10.96.0.10:53

** server can't find myservice.default.svc.cluster.local: NXDOMAIN

*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer
*** Can't find myservice.default.svc.cluster.local: No answer
*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer

waiting for myservice
## the init containers stay unready until the Services they query actually exist
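
Both init containers loop on an nslookup for Services named myservice and mydb, so the Pod cannot leave the Init phase until those Services exist. A minimal way to watch the transition while the Service manifest below is applied (commands only; output omitted):

# terminal 1: watch the pod move Init:0/2 -> Init:1/2 -> PodInitializing -> Running
kubectl get pods -w

# terminal 2: follow the first init container's retry loop
kubectl logs -f myapp-pod -c init-myservice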

service-init-example.yaml

kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

---
kind: Service
apiVersion: v1
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
kubectl apply -f service-init-example.yaml

Details

[root@k8s01 ~]# kubectl get pods   # the first init container has completed
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:1/2   0          2m2s
[root@k8s01 ~]# kubectl get pods   # the app container is starting
NAME        READY   STATUS            RESTARTS   AGE
myapp-pod   0/1     PodInitializing   0          2m44s
[root@k8s01 ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          2m49s
[root@k8s01 ~]# kubectl logs myapp-pod
The app is running!
[root@k8s01 ~]# kubectl describe pod myapp-pod
Name:         myapp-pod
Namespace:    default
Priority:     0
Node:         k8s02/192.168.43.102
Start Time:   Wed, 19 Aug 2020 09:05:25 -0400
Labels:       app=myapp
Annotations:  cni.projectcalico.org/podIP: 172.18.236.154/32
              cni.projectcalico.org/podIPs: 172.18.236.154/32
Status:       Running
IP:           172.18.236.154
IPs:
  IP:  172.18.236.154
Init Containers:
  init-myservice:
    Container ID:  docker://f1aac5b2b50ef5341ba949c72481cc739fbff046faa80113e388983ff438e92a
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nslookup myservice; do echo waiting for myservice; sleep 2;done;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 19 Aug 2020 09:05:43 -0400
      Finished:     Wed, 19 Aug 2020 09:06:58 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rr77c (ro)
  init-mydb:
    Container ID:  docker://215168a8daf652ca35b5c3dc74705561d55dc4524c0ca98d6ec214ca3cf0a429
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nslookup mydb; do echo waiting for mydb; sleep 2; done;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 19 Aug 2020 09:07:21 -0400
      Finished:     Wed, 19 Aug 2020 09:07:47 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rr77c (ro)
Containers:
  myapp-container:
    Container ID:  docker://76f2df54070bb74c813d45765f0440461a7c006ad85a122fdcb3b07c936b632c
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo The app is running! && sleep 3600
    State:          Running
      Started:      Wed, 19 Aug 2020 09:08:12 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rr77c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-rr77c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rr77c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m59s  default-scheduler  Successfully assigned default/myapp-pod to k8s02
  Normal  Pulling    2m58s  kubelet, k8s02     Pulling image "busybox"
  Normal  Pulled     2m41s  kubelet, k8s02     Successfully pulled image "busybox"
  Normal  Created    2m41s  kubelet, k8s02     Created container init-myservice
  Normal  Started    2m41s  kubelet, k8s02     Started container init-myservice
  Normal  Pulling    86s    kubelet, k8s02     Pulling image "busybox"
  Normal  Pulled     63s    kubelet, k8s02     Successfully pulled image "busybox"
  Normal  Created    63s    kubelet, k8s02     Created container init-mydb
  Normal  Started    63s    kubelet, k8s02     Started container init-mydb
  Normal  Pulling    36s    kubelet, k8s02     Pulling image "busybox"
  Normal  Pulled     13s    kubelet, k8s02     Successfully pulled image "busybox"
  Normal  Created    13s    kubelet, k8s02     Created container myapp-container
  Normal  Started    12s    kubelet, k8s02     Started container myapp-container

Probes: readiness check (readinessProbe)

readinessProbe-http-get.yaml

apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    readinessProbe:            # key field
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1   # delay before the first probe
      periodSeconds: 3         # interval between probes

[root@k8s01 ~]# kubectl describe pod readiness-httpget-pod
## partially omitted
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  5s    default-scheduler  Successfully assigned default/readiness-httpget-pod to k8s03
  Normal   Pulled     4s    kubelet, k8s03     Container image "nginx:1.7.9" already present on machine
  Normal   Created    4s    kubelet, k8s03     Created container readiness-httpget-container
  Normal   Started    3s    kubelet, k8s03     Started container readiness-httpget-container
  Warning  Unhealthy  0s    kubelet, k8s03     Readiness probe failed: HTTP probe failed with statuscode: 404
## the probe fails because /index1.html does not exist yet

[root@k8s01 ~]# kubectl exec readiness-httpget-pod -it -- /bin/bash
root@readiness-httpget-pod:/# echo "123" > /usr/share/nginx/html/index1.html
root@readiness-httpget-pod:/# exit
exit
[root@k8s01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
myapp-pod               1/1     Running   0          26m
readiness-httpget-pod   1/1     Running   0          3m34s

Once /usr/share/nginx/html/index1.html exists, the probe gets a 200, the readiness check succeeds, and the Pod shows READY 1/1 (log output omitted).
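
A quick, hedged way to confirm the readiness flip without rereading the events is to query the Pod's Ready condition directly (the jsonpath expression below is illustrative):

kubectl get pod readiness-httpget-pod \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# prints True once the readiness probe starts succeeding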

Probes: liveness check (livenessProbe)

livenessProbe-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:1.32.0
    imagePullPolicy: IfNotPresent
    # /tmp/live exists for the first 60 s, then disappears
    command: ["/bin/sh","-c","touch /tmp/live ; sleep 60; rm -rf /tmp/live; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/live"]   # fails once the file is removed
      initialDelaySeconds: 1
      periodSeconds: 3

[root@k8s01 ~]# kubectl get pod   ## liveness-exec-pod is restarted periodically
NAME                    READY   STATUS             RESTARTS   AGE
liveness-exec-pod       0/1     CrashLoopBackOff   6          10m
myapp-pod               1/1     Running            0          7h31m
readiness-httpget-pod   1/1     Running            0          7h1m
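
Why it keeps restarting: /tmp/live exists for about 60 seconds and is then removed; with periodSeconds: 3 and the default failureThreshold of 3, the kubelet restarts the container roughly 9 seconds after the file disappears, and the cycle repeats in the fresh container. A hedged way to watch the counter climb (illustrative jsonpath):

kubectl get pod liveness-exec-pod \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'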

livenessProbe-httpget

livenessProbe-httpget.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

[root@k8s01 ~]# kubectl exec liveness-httpget-pod -it -- /bin/bash
## rename /index.html to index1.html inside the container

[root@k8s01 ~]# kubectl get pod   ## liveness-httpget-pod gets restarted
NAME                    READY   STATUS             RESTARTS   AGE
liveness-exec-pod       0/1     CrashLoopBackOff   6          12m
liveness-httpget-pod    1/1     Running            1          2m59s
myapp-pod               1/1     Running            0          7h31m
readiness-httpget-pod   1/1     Running            0          7h1m
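
The rename step above is only described, not shown; one illustrative way to do it from inside the exec shell (the path is the nginx image's default document root):

root@liveness-httpget-pod:/# mv /usr/share/nginx/html/index.html /usr/share/nginx/html/index1.html
# the next liveness probe receives a 404, so the kubelet restarts the container;
# the restarted container gets the original image filesystem back, so RESTARTS stops at 1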

livenessProbe-tcp

livenessProbe-tcp.yaml

apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp80
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      tcpSocket:
        port: 80
      periodSeconds: 3

livenessProbe-tcp-81.yaml

apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp81
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      tcpSocket:
        port: 81   # nothing listens on 81, so the probe always fails
      periodSeconds: 3

[root@k8s01 ~]# kubectl get pod   # the probe on port 80 succeeds; the pod probing port 81 keeps restarting
NAME          READY   STATUS             RESTARTS   AGE
probe-tcp80   1/1     Running            0          19s
probe-tcp81   0/1     CrashLoopBackOff   4          2m48s

[root@k8s01 ~]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
probe-tcp80   1/1     Running   0          74s
probe-tcp81   1/1     Running   6          3m43s

View the events

# Excerpt: Liveness probe failed: dial tcp 172.18.235.163:81: connect: connection refused
Events:
  Type     Reason     Age                      From               Message
  ----     ------     ----                     ----               -------
  Normal   Scheduled  <unknown>                default-scheduler  Successfully assigned default/probe-tcp81 to k8s03
  Normal   Created    6h39m (x4 over 6h41m)    kubelet, k8s03     Created container nginx
  Normal   Started    6h39m (x4 over 6h41m)    kubelet, k8s03     Started container nginx
  Normal   Killing    6h39m (x3 over 6h40m)    kubelet, k8s03     Container nginx failed liveness probe, will be restarted
  Normal   Pulled     6h39m (x4 over 6h41m)    kubelet, k8s03     Container image "nginx:1.7.9" already present on machine
  Warning  Unhealthy  6h39m (x10 over 6h41m)   kubelet, k8s03     Liveness probe failed: dial tcp 172.18.235.163:81: connect: connection refused
  Warning  BackOff    6h36m (x10 over 6h38m)   kubelet, k8s03     Back-off restarting failed container

Delete all pods

kubectl delete pod --all

Liveness + readiness + startup probes (startupProbe introduced in v1.16)

liveness-readiness.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-readiness-pod
  namespace: default
spec:
  containers:
  - name: liveness-readiness-container
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: http               # named port referenced by the livenessProbe below
      containerPort: 80
    readinessProbe:            # key field
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1   # delay before the first probe
      periodSeconds: 3         # interval between probes
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10
    startupProbe:              # runs first; 3 failed attempts trigger a restart, and only after it succeeds do the other two probes start
      httpGet:
        path: /index.html
        port: 80
      failureThreshold: 3      # failure threshold
      periodSeconds: 2

Once the container passes the startupProbe, the kubelet performs a liveness check every 3 seconds and a readiness check every 3 seconds (the periodSeconds values defined above).
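
For a genuinely slow-starting application, the startup probe is usually sized so that failureThreshold × periodSeconds covers the worst-case startup time; the liveness and readiness probes stay disabled until it succeeds. A hedged sketch with illustrative numbers (not from the manifest above): 30 attempts every 10 s allow up to 300 s before the first restart.

    startupProbe:
      httpGet:
        path: /index.html
        port: 80
      failureThreshold: 30   # 30 attempts ...
      periodSeconds: 10      # ... x 10 s = up to 300 s allowed for startup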

postStart / preStop hooks

start_stop.yaml

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:   # runs right after the container is created
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler >/usr/share/message"]
      preStop:     # runs when the container is about to be terminated
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the poststop handler >/usr/share/message"]

Verify

[root@k8s01 ~]# kubectl exec lifecycle-demo -it -- /bin/bash

root@lifecycle-demo:/# cat /usr/share/message
Hello from the postStart handler   # postStart ran successfully
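
postStart is easy to verify as shown above; preStop only fires when the Pod is being terminated, before SIGTERM is delivered, and it must finish within the termination grace period. A hedged sketch of a more typical preStop that drains nginx first (terminationGracePeriodSeconds and the nginx -s quit command are illustrative additions, not part of the original manifest):

spec:
  terminationGracePeriodSeconds: 30   # preStop plus shutdown must complete within this window
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]   # let in-flight requests drain before the container exits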